mass atrocity
From prosthetic memory to prosthetic denial: Auditing whether large language models are prone to mass atrocity denialism
Ulloa, Roberto, Zucker, Eve M., Bultmann, Daniel, Simon, David J., Makhortykh, Mykola
The proliferation of large language models (LLMs) can influence how historical narratives are disseminated and perceived. This study explores the implications of LLMs' responses on the representation of mass atrocity memory, examining whether generative AI systems contribute to prosthetic memory, i.e., mediated experiences of historical events, or to what we term "prosthetic denial," the AI-mediated erasure or distortion of atrocity memories. We argue that LLMs function as interfaces that can elicit prosthetic memories and, therefore, act as experiential sites for memory transmission, but also introduce risks of denialism, particularly when their outputs align with contested or revisionist narratives. To empirically assess these risks, we conducted a comparative audit of five LLMs (Claude, GPT, Llama, Mixtral, and Gemini) across four historical case studies: the Holodomor, the Holocaust, the Cambodian Genocide, and the genocide against the Tutsis in Rwanda. Each model was prompted with questions addressing common denialist claims in English and an alternative language relevant to each case (Ukrainian, German, Khmer, and French). Our findings reveal that while LLMs generally produce accurate responses for widely documented events like the Holocaust, significant inconsistencies and susceptibility to denialist framings are observed for more underrepresented cases like the Cambodian Genocide. The disparities highlight the influence of training data availability and the probabilistic nature of LLM responses on memory integrity. We conclude that while LLMs extend the concept of prosthetic memory, their unmoderated use risks reinforcing historical denialism, raising ethical concerns for (digital) memory preservation, and potentially challenging the advantageous role of technology associated with the original values of prosthetic memory.
- Leisure & Entertainment (0.67)
- Media > News (0.47)
- Government > Regional Government (0.46)
- Media > Film (0.46)
AI apocalypse: Ex-Google worker fears 'killer robots' could cause 'mass atrocities'
"Although I was not directly involved in speeding up the video footage recognition I realised that I was still part of the kill chain; that this would ultimately lead to more people being targeted and killed by the US military in places like Afghanistan." The former Google engineer predicts autonomous weapons currently in development pose a far greater risk to humanity than remote-controlled drones. She outlined how external forces ranging from changing weather systems to machines being unable to work out complex human behaviour might throw killer robots off course, with potentially fatal consequences. She told The Guardian: "You could have a scenario where autonomous weapons that have been sent out to do a job confront unexpected radar signals in an area they are searching; there could be weather that was not factored into its software or they come across a group of armed men who appear to be insurgent enemies but in fact are out with guns hunting for food. "The machine doesn't have the discernment or common sense that the human touch has.
Ex-Google Worker Fears 'Mass Atrocities' Caused by Killer Robots
Increasingly sophisticated killer AI robots and machines could accidentally start a war and lead to mass atrocities, an ex-Google worker has told The Guardian. Laura Nolan resigned from Google last year in protest at being assigned to Project Maven, which was aimed at enhancing U.S. military drone technology. She has called for all unmanned autonomous weapons to be banned. AI killer robots have the potential to do "calamitous things that they were not originally programmed for," Nolan explained to the Guardian. She is part of a growing group of experts that are showing concern over the development of artificial intelligence programmed into war machines.
- North America > United States > New York (0.07)
- Asia > Japan (0.07)
Ex-Google worker fears 'killer robots' could cause mass atrocities
A new generation of autonomous weapons or "killer robots" could accidentally start a war or cause mass atrocities, a former top Google software engineer has warned. Laura Nolan, who resigned from Google last year in protest at being sent to work on a project to dramatically enhance US military drone technology, has called for all AI killing machines not operated by humans to be banned. Nolan said killer robots not guided by human remote control should be outlawed by the same type of international treaty that bans chemical weapons. Unlike drones, which are controlled by military teams often thousands of miles away from where the flying weapon is being deployed, Nolan said killer robots have the potential to do "calamitous things that they were not originally programmed for". There is no suggestion that Google is involved in the development of autonomous weapons systems.
- Europe > Russia (0.06)
- Asia > Russia (0.06)
- North America > United States > New York (0.05)
- (3 more...)
- Government > Military (1.00)
- Government > Regional Government > North America Government > United States Government (0.93)
Ex-Google worker fears 'killer robots' could cause mass atrocities
A new generation of autonomous weapons or "killer robots" could accidentally start a war or cause mass atrocities, a former top Google software engineer has warned. Laura Nolan, who resigned from Google last year in protest at being sent to work on a project to dramatically enhance US military drone technology, has called for all AI killing machines not operated by humans to be banned. Nolan said killer robots not guided by human remote control should be outlawed by the same type of international treaty that bans chemical weapons. Unlike drones, which are controlled by military teams often thousands of miles away from where the flying weapon is being deployed, Nolan said killer robots have the potential to do "calamitous things that they were not originally programmed for". Nolan, who has joined the Campaign to Stop Killer Robots and has briefed UN diplomats in New York and Geneva over the dangers posed by autonomous weapons, said: "The likelihood of a disaster is in proportion to how many of these machines will be in a particular area at once. What you are looking at are possible atrocities and unlawful killings even under laws of warfare, especially if hundreds or thousands of these machines are deployed. "There could be large-scale accidents because these things will start to behave in unexpected ways.
- North America > United States > New York (0.25)
- Europe > Russia (0.06)
- Asia > Russia (0.06)
- (3 more...)
- Government > Military (1.00)
- Government > Regional Government > North America Government > United States Government (0.93)